Amy Karle, "BioAI-Formed Mycelium" (2023)

To Reckon with Generative AI, Make It a Public Problem

Often, problems that seem narrow and purely technical are best tackled if they’re recast as “public problems,” a concept put forth almost a century ago by philosopher and educator John Dewey. Examples of public problems include dirty air, polluted water, global warming, and childhood education. Public problems bring harms that are not always felt individually but that nonetheless shape what it means to be a thriving person in a thriving society. These problems need to be noticed, discussed, and collectively managed. In contrast to problems that are personal, private, or technical, Dewey wrote, public problems happen when people experience “indirect consequences” that need to be collectively and “systematically cared for,” regardless of an individual’s circumstance, wealth, privilege, or interests. Public problems define our shared realities.

Although generative AI has been framed as a technical problem, recasting it as a public problem offers new avenues for action. Generative AI is quickly becoming a language for telling society’s collective stories and teaching us about each other. If you ask generative AI to make a story or video that explains climate change, you are actually asking a probabilistic machine learning model to create a statistically acceptable account of a public problem. Tools such as ChatGPT and Midjourney are fast becoming languages for understanding public problems, yet there has been little analysis of their power to shape the stories that humans use to understand the shared consequences that Dewey told us create public life.

To grapple with generative AI effectively, consumers and developers alike need to see it not only as biased datasets and machine learning run amok, but also as a fast-emerging language that people are using to learn, make sense of their worlds, and communicate with others. In other words, it needs to be seen as a public problem.

First, researchers need to see generative AI as a powerful language, made of the “boundaries,” “infrastructures,” and “hinges” that scholars of science and technology tell us create technologies. This means tracing the connections among the people and machines that make synthetic language: for example, engineers who build machine learning systems, entrepreneurs who pitch business models, journalists who make synthetic news stories, and audiences who struggle to know what to believe. These are the complex and largely invisible relationships that make generative AI a language for representing knowledge, fueling innovation, telling stories, and creating shared realities.

Second, as a society, we need to analyze the harms created by generative AI. When statistical hallucinations invent facts, chatbots misattribute authorship, or computational summaries bungle analyses, they produce dangerously wrong language delivered with the confidence of seemingly neutral computation. These errors are not just rare and idiosyncratic curiosities of misinformation; their real and imagined existence leads people to see media as unstable, unreliable, and untrustworthy. Society’s information sources, and its ability to gauge reality, are destabilized.

Finally, all members of society should reject the assertions of technology companies and AI “godfathers” who claim that generative AI is both an existential threat and a problem that only technologists can manage. Public problems are collectively debated, accounted for, and managed; they are not the purview of private companies or self-identified caretakers who work on their own timelines with proprietary knowledge. Truly public problems are never outsourced to private interests or charismatic authorities.

A public problem is not merely a technical curiosity, a moral panic, or an inevitable future. It is a system of relationships between people and machines that creates language, makes mistakes, and needs to be systematically cared for. Once we understand generative AI as a vital language for creating shared realities and tackling collective challenges, we can start to see it as a public problem, and then we will be in a better place to solve it.

Cite this Article

Ananny, Mike. “To Reckon with Generative AI, Make It a Public Problem.” Issues in Science and Technology (): 88. https://doi.org/10.58875/EHNY5426